Results 1 - 5 of 5
1.
Giornale Italiano di Nefrologia ; 39(6):28-32, 2022.
Article in English | Scopus | ID: covidwho-2301771

ABSTRACT

The global coronavirus disease 2019 (COVID-19) pandemic required vaccination even in children to reduce infection. We report on the development of acute kidney injury (AKI) and minimal change disease (MCD) nephrotic syndrome (NS) shortly after the first injection of the BNT162b2 COVID-19 vaccine (Pfizer-BioNTech). A 12-year-old previously healthy boy was referred to our hospital with complaints of peripheral edema and nephrotic-range proteinuria. Nine days earlier he had received his first injection of the BNT162b2 COVID-19 vaccine (Pfizer-BioNTech). Seven days after the injection, he developed leg edema, which rapidly progressed to anasarca with significant weight gain. On admission, serum creatinine was 1.3 mg/dL and 24-hour urinary protein excretion was 4 grams, with fluid overload. As kidney function continued to decline over the following days, empirical steroid treatment and renal replacement therapy with ultrafiltration were started, and a kidney biopsy was performed. Seven days after steroid therapy, kidney function began to improve, gradually returning to normal. The association of MCD, nephrotic syndrome, and AKI has not previously been described following the Pfizer-BioNTech COVID-19 vaccine in the pediatric population, although this triad has been reported in adults. Further similar case reports are needed to establish the real incidence of this possible vaccine side effect. © 2022 Società Italiana di Nefrologia - Anno 39 Volume 6 n° 4.

2.
Pediatric Nephrology ; 37(11):2911-2912, 2022.
Article in English | Web of Science | ID: covidwho-2068270
3.
Pediatric Nephrology ; 37(11):2963-2964, 2022.
Article in English | Web of Science | ID: covidwho-2068174
4.
IEEE Computational Intelligence Magazine ; 17(1):72-85, 2022.
Article in English | Web of Science | ID: covidwho-1627274

ABSTRACT

Can satisfactory explanations for complex machine learning models be achieved in high-risk automated decision-making? How can such explanations be integrated into a data protection framework safeguarding a right to explanation? This article explores from an interdisciplinary point of view the connection between existing legal requirements for the explainability of AI systems set out in the General Data Protection Regulation (GDPR) and the current state of the art in the field of explainable AI. It studies the challenges of providing human-legible explanations for current and future AI-based decision-making systems in practice, based on two scenarios of automated decision-making: credit scoring and the medical diagnosis of COVID-19. These scenarios exemplify the trend towards increasingly complex machine learning algorithms in automated decision-making, both in terms of data and models. Current machine learning techniques, in particular those based on deep learning, are unable to make clear causal links between input data and final decisions. This limits the provision of exact, human-legible reasons behind specific decisions and presents a serious challenge to the provision of satisfactory, fair and transparent explanations. Therefore, the conclusion is that the quality of explanations might not be considered an adequate safeguard for automated decision-making processes under the GDPR. Accordingly, additional tools should be considered to complement explanations. These could include algorithmic impact assessments, other forms of algorithmic justifications based on broader AI principles, and new technical developments in trustworthy AI. This suggests that eventually all of these approaches would need to be considered as a whole.

5.
FAccT - Proc. ACM Conf. Fairness, Accountability, and Transparency ; 549-559, 2021.
Article in English | Scopus | ID: covidwho-1145375

ABSTRACT

Can we achieve an adequate level of explanation for complex machine learning models in high-risk AI applications when applying the EU data protection framework? In this article, we address this question, analysing from a multidisciplinary point of view the connection between existing legal requirements for the explainability of AI systems and the current state of the art in the field of explainable AI. We present a case study of a real-life scenario designed to illustrate the application of an AI-based automated decision-making process for the medical diagnosis of COVID-19 patients. The scenario exemplifies the trend towards increasingly complex machine-learning algorithms with growing dimensionality of data and model parameters. Based on this setting, we analyse the challenges of providing human-legible explanations in practice and we discuss their legal implications under the General Data Protection Regulation (GDPR). Although it might appear that there is just one single form of explanation in the GDPR, we conclude that the context in which the decision-making system operates requires that several forms of explanation be considered. Thus, we propose to design explanations in multiple forms, depending on: the moment of the disclosure of the explanation (either ex ante or ex post); the audience of the explanation (an explanation for an expert or a data controller versus an explanation for the final data subject); the layer of granularity (such as general, group-based or individual explanations); and the level of risk the automated decision poses to fundamental rights and freedoms. Consequently, explanations should embrace this multifaceted environment. Furthermore, we highlight how the current inability of complex, deep-learning-based machine learning models to make clear causal links between input data and final decisions limits the provision of exact, human-legible reasons behind specific decisions. This makes the provision of satisfactory, fair and transparent explanations a serious challenge. Therefore, there are cases where the quality of possible explanations might not be assessed as an adequate safeguard for automated decision-making processes under Article 22(3) GDPR. Accordingly, we suggest that further research should focus on alternative tools in the GDPR (such as algorithmic impact assessments under Article 35 GDPR or algorithmic lawfulness justifications) that might be considered to complement the explanations of automated decision-making. © 2021 ACM.
